

Search for: All records

Creators/Authors contains: "Mallik, Anik"

Note: When clicking on a Digital Object Identifier (DOI) link, you will be taken to an external site maintained by the publisher. Some full-text articles may not be freely available during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Edge computing has enabled users to experience ubiquitous artificial intelligence (AI) through distributed learning and inference. Continuous efforts to reduce the computing burden on edge devices and to strengthen privacy preservation have popularized remote inference and federated learning (FL). However, network-level side-channel information can still expose sensitive operational states. In this paper, we demonstrate that network-level telemetry data can be used to fingerprint edge-assisted learning and inference workflows and reveal their operational phases. We propose a hierarchical classification framework in which the first stage separates learning from inference and the second distinguishes among learning phases. In addition, we develop a testbed with convolutional and recurrent neural network-based FL and remote inference systems, alongside an attacker device that collects network sniffing data. Using features derived from flow volumes, transfer speeds, ratios, and latency, the system achieves a fingerprinting accuracy of 100% between learning and inference tasks and 95.9% across different learning phases. These results highlight the vulnerability of edge-assisted distributed AI systems to network-based side-channel fingerprinting.
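The two-stage idea in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's method: the feature definitions follow the named feature families (flow volume, transfer speed, ratio, latency), but the flow-record fields, thresholds, and decision rules are hypothetical stand-ins for the trained classifiers.

```python
# Illustrative sketch of hierarchical traffic fingerprinting.
# Stage 1 separates learning from inference; stage 2 distinguishes
# learning phases. All field names and thresholds are assumptions.

def extract_features(flows):
    """Aggregate per-flow records into the feature families the abstract
    names: flow volume, transfer speed, uplink/downlink ratio, latency."""
    up = sum(f["up_bytes"] for f in flows)
    down = sum(f["down_bytes"] for f in flows)
    duration = max(f["end"] for f in flows) - min(f["start"] for f in flows)
    return {
        "volume": up + down,
        "speed": (up + down) / duration if duration > 0 else 0.0,
        "ratio": up / down if down else float("inf"),
        "latency": sum(f["rtt"] for f in flows) / len(flows),
    }

def stage1_is_learning(feat, ratio_threshold=0.5):
    # Intuition (assumed): FL clients upload model updates, so the
    # uplink/downlink ratio is high; remote inference mostly receives
    # results, so the ratio is low.
    return feat["ratio"] > ratio_threshold

def stage2_learning_phase(feat, volume_threshold=1_000_000):
    # Within learning, heavy flows suggest weight exchange and light
    # flows suggest control/aggregation rounds (illustrative rule only).
    return "weight-exchange" if feat["volume"] > volume_threshold else "control"

def classify(flows):
    """Hierarchical decision: task type first, then learning phase."""
    feat = extract_features(flows)
    if not stage1_is_learning(feat):
        return "inference"
    return "learning/" + stage2_learning_phase(feat)
```

In the paper this two-stage structure is driven by trained classifiers rather than fixed thresholds; the sketch only shows how the hierarchy composes.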
  2. Federated Learning (FL) enables collaborative model training across distributed devices while safeguarding data and user privacy. However, FL remains susceptible to privacy threats that compromise data through direct means. In contrast, an outsider's ability to indirectly compromise the confidentiality of the FL model architecture (e.g., a convolutional neural network (CNN) or a recurrent neural network (RNN)) on a client device remains unexplored. If leaked, this information can enable next-level attacks tailored to the architecture. This paper proposes a novel side-channel fingerprinting attack that leverages flow-level and packet-level statistics of encrypted wireless traffic from an FL client to infer its deep learning model architecture. We name it FLARE, a fingerprinting framework based on FL Architecture REconnaissance. Evaluation across various CNN and RNN variants, including pre-trained and custom models trained over IEEE 802.11 Wi-Fi, shows that FLARE achieves over 98% F1-score in closed-world and up to 91% in open-world scenarios. These results reveal that CNN and RNN models leak distinguishable traffic patterns, enabling architecture fingerprinting even under realistic FL settings with hardware, software, and data heterogeneity. To our knowledge, this is the first work to fingerprint FL model architectures by sniffing encrypted wireless traffic, exposing a critical side-channel vulnerability in current FL systems.
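The key point in the abstract above is that only traffic metadata is needed, never packet contents. A minimal sketch of that pipeline, assuming a sniffed trace as (timestamp, size) pairs; the exact statistics and the nearest-centroid rule are illustrative stand-ins for FLARE's trained classifier:

```python
import statistics

def packet_features(packets):
    """Flow- and packet-level statistics of the kind a traffic
    fingerprinter consumes. `packets` is a list of (timestamp, size)
    pairs from an encrypted trace; payloads are never inspected."""
    sizes = [size for _, size in packets]
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(packets, packets[1:])]
    return {
        "pkt_count": len(packets),
        "total_bytes": sum(sizes),
        "size_mean": statistics.mean(sizes),
        "size_std": statistics.pstdev(sizes),
        "iat_mean": statistics.mean(gaps) if gaps else 0.0,
    }

def nearest_centroid(feat, centroids):
    """Toy stand-in for a trained classifier: the architecture label
    whose feature centroid is closest (squared distance) wins."""
    def dist(c):
        return sum((feat[k] - c[k]) ** 2 for k in feat)
    return min(centroids, key=lambda label: dist(centroids[label]))
```

In practice the features would be normalized and the centroids replaced by a model fit on labeled traces of each architecture; the sketch only shows the metadata-to-label flow.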
  3. Large Language Models (LLMs) offer significant potential for enhancing smart home assistants through more natural and responsive automation. However, relying solely on cloud-based LLMs raises concerns about network dependency and user privacy. In this paper, we conduct a comprehensive evaluation of the feasibility of deploying LLMs on low-cost devices for smart home automation, in terms of latency and energy consumption. In particular, we first build a comprehensive and reproducible testbed that integrates benchmark, well-trained LLMs with the Home Assistant platform on low-cost edge devices such as the Raspberry Pi 4 and Pi 5. Leveraging this testbed, we evaluate on-device LLM inference performance in terms of inference latency, energy consumption, and thermal characteristics, and provide theoretical estimates of remaining runtime in battery-powered settings. We evaluate on-device performance on two benchmark quantized models for smart homes, Home-1B-v3 and Home-3B-v3. Experimental results show that the Raspberry Pi 5 significantly outperforms the Raspberry Pi 4 in both latency and energy efficiency across both models, owing to its superior processing capabilities. In particular, for the Home-1B-v3 model, the mean inference energy decreases from 264.3 J to 133.9 J (a 49.3% reduction), and the mean latency decreases from 40.3 s to 18.7 s (a 53.4% reduction) on the Pi 5 compared with the Pi 4. These findings provide valuable empirical insights for energy- and latency-aware smart home automation using quantized LLMs on low-cost edge hardware, and lay the groundwork for future research on the energy efficiency of LLM applications.
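The measurements in the abstract above reduce to a few simple computations: energy as power integrated over time, relative reduction between devices, and a theoretical battery-runtime estimate. A small sketch, where the helper functions and any battery capacity are illustrative assumptions; only the 264.3 J / 133.9 J figures come from the abstract:

```python
def mean_energy_joules(power_samples_w, interval_s):
    """Integrate sampled power draw (watts, fixed sampling interval in
    seconds) to approximate the energy of one inference in joules."""
    return sum(power_samples_w) * interval_s

def percent_reduction(old, new):
    """Relative improvement of `new` over `old`, as a percentage."""
    return 100.0 * (old - new) / old

def remaining_runtime_s(battery_wh, mean_power_w):
    """Theoretical remaining runtime on a battery of the given capacity
    (watt-hours) at a sustained mean power draw (watts)."""
    return battery_wh * 3600.0 / mean_power_w

# Using the abstract's Home-1B-v3 energy figures:
# percent_reduction(264.3, 133.9) evaluates to about 49.3, matching
# the reported 49.3% reduction from Pi 4 to Pi 5.
```

For example, a hypothetical 10 Wh battery powering a device drawing a sustained 5 W would run for 7200 s, i.e. two hours, under this model.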